SeGAN: Segmenting and Generating the Invisible
Objects often occlude each other in scenes; inferring their appearance beyond
their visible parts plays an important role in scene understanding, depth
estimation, object interaction and manipulation. In this paper, we study the
challenging problem of completing the appearance of occluded objects. Doing so
requires knowing which pixels to paint (segmenting the invisible parts of
objects) and what color to paint them (generating the invisible parts). Our
proposed novel solution, SeGAN, jointly optimizes for both segmentation and
generation of the invisible parts of objects. Our experimental results show
that: (a) SeGAN can learn to generate the appearance of the occluded parts of
objects; (b) SeGAN outperforms state-of-the-art segmentation baselines for the
invisible parts of objects; (c) trained on synthetic photo-realistic images,
SeGAN can reliably segment natural images; (d) by reasoning about
occluder-occludee relations, our method can infer depth layering.
Comment: Accepted to CVPR18 as spotlight.
Newtonian Image Understanding: Unfolding the Dynamics of Objects in Static Images
In this paper, we study the challenging problem of predicting the dynamics of
objects in static images. Given a query object in an image, our goal is to
provide a physical understanding of the object in terms of the forces acting
upon it and its long term motion as response to those forces. Direct and
explicit estimation of the forces and the motion of objects from a single image
is extremely challenging. We define intermediate physical abstractions called
Newtonian scenarios and introduce the Newtonian Neural Network (N³) that learns
to map a single image to a state in a Newtonian scenario. Our experimental
evaluations show that our method can reliably predict the dynamics of a query
object from a single image. In addition, our approach can provide physical
reasoning that supports the predicted dynamics in terms of velocity and force
vectors. To spur research in this direction, we compiled the Visual Newtonian
Dynamics (VIND) dataset, which includes 6806 videos aligned with Newtonian
scenarios represented using game engines, and 4516 still images with their
ground truth dynamics.
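One way to read the abstract's formulation is as classification of an image and query object into a discrete scenario state, from which force and velocity vectors can be read off the game-engine rendering. The sketch below assumes that framing; the layer sizes, the number of states, and the single-stream design are placeholders, not the paper's N³ model.

```python
# Hypothetical classification view of mapping an image plus a query-object
# mask to a Newtonian scenario state; all sizes are assumptions.
import torch
import torch.nn as nn

class NewtonianStateClassifier(nn.Module):
    def __init__(self, num_states=12):  # assumed number of scenario states
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_states)

    def forward(self, image, object_mask):
        # Concatenating the query-object mask tells the network which
        # object's dynamics are being asked about.
        x = torch.cat([image, object_mask], dim=1)
        return self.head(self.encoder(x))

# The predicted state indexes into a game-engine scenario whose force and
# velocity vectors supply the physical reasoning behind the prediction.
logits = NewtonianStateClassifier()(torch.randn(1, 3, 64, 64),
                                    torch.ones(1, 1, 64, 64))
```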
Interactron: Embodied Adaptive Object Detection
Over the years various methods have been proposed for the problem of object
detection. Recently, we have witnessed great strides in this domain owing to
the emergence of powerful deep neural networks. However, there are typically
two main assumptions common among these approaches. First, the model is trained
on a fixed training set and is evaluated on a pre-recorded test set. Second,
the model is kept frozen after the training phase, so no further updates are
performed after the training is finished. These two assumptions limit the
applicability of these methods to real-world settings. In this paper, we
propose Interactron, a method for adaptive object detection in an interactive
setting, where the goal is to perform object detection in images observed by an
embodied agent navigating in different environments. Our idea is to continue
training during inference and adapt the model at test time, without any explicit
supervision, by interacting with the environment. Our adaptive object detection
model provides an 11.8-point improvement in AP (and 19.1 points in AP50) over
DETR, a recent, high-performance object detector. Moreover, we show that our
object detection model adapts to environments with completely different
appearance characteristics, and its performance is on par with a model trained
with full supervision for those environments. The code is available at:
https://github.com/allenai/interactron
Comment: CVPR 2022.
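A minimal sketch of the adapt-at-inference loop the abstract outlines, assuming an agent, an environment interface, and a learned self-supervised adaptation loss; the names below are illustrative and do not reflect the repository's API.

```python
# Hypothetical test-time adaptation loop: the detector keeps training in
# the test environment with a learned loss on frames the agent gathers,
# using no ground-truth labels. All names here are assumptions.
import torch

def adaptive_detection_episode(agent, detector, adaptation_loss, env,
                               steps=5, lr=1e-4):
    optimizer = torch.optim.SGD(detector.parameters(), lr=lr)
    frames = [env.observe()]
    for _ in range(steps):
        # The agent chooses moves so the collected frames are informative
        # for adapting the detector to this environment.
        action = agent.act(frames)
        frames.append(env.step(action))
        # Self-supervised update: no boxes or class labels are provided.
        loss = adaptation_loss(detector, frames)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return detector(frames[-1])  # detections on the latest observation
```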
Complexity of Representation and Inference in Compositional Models with Part Sharing
This paper performs a complexity analysis of a class of serial and parallel compositional models of multiple objects and shows that they enable efficient representation and rapid inference. Compositional models are generative and represent objects in a hierarchically distributed manner in terms of parts and subparts, which are constructed recursively by part-subpart compositions. Parts are represented more coarsely at higher levels of the hierarchy, so that the upper levels give coarse summary descriptions (e.g., there is a horse in the image) while the lower levels represent the details (e.g., the positions of the legs of the horse). This hierarchically distributed representation obeys the executive summary principle: a high-level executive only requires a coarse summary description and can, if necessary, get more details by consulting lower-level executives. The parts and subparts are organized into hierarchical dictionaries, which enable part sharing between different objects and thus efficient representation of many objects.

The first main contribution of this paper is to show that compositional models can be mapped onto a parallel visual architecture similar to that used by bio-inspired visual models such as deep convolutional networks, but more explicit in terms of representation, hence enabling part detection as well as object detection, and suitable for complexity analysis. Inference algorithms can be run on this architecture to exploit the gains from part sharing and executive summary. Effectively, this compositional architecture enables us to perform exact inference simultaneously over a large class of generative models of objects.

The second contribution is an analysis of the complexity of compositional models in terms of computation time (for serial computers) and number of nodes (e.g., "neurons") for parallel computers. In particular, we compute the complexity gains from part sharing and executive summary and their dependence on how the dictionary scales with the level of the hierarchy. We explore three regimes of scaling behavior, where the dictionary size (i) increases exponentially with the level of the hierarchy, (ii) is determined by an unsupervised compositional learning algorithm applied to real data, or (iii) decreases exponentially with scale. This analysis shows that in some regimes the use of shared parts enables algorithms that can perform inference in time linear in the number of levels for an exponential number of objects; in other regimes part sharing has little advantage for serial computers but can enable linear processing on parallel computers.

This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216, and also by ARO 62250-CS.
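To make the serial-cost comparison concrete, here is a back-of-the-envelope model in the spirit of the analysis above. The cost model and constants are assumptions chosen for illustration, not the paper's formulas: with sharing, each dictionary entry is matched once per level at that level's (exponentially coarsening) resolution; without sharing, each object template is matched independently at full resolution.

```python
# Toy serial-cost comparison for part sharing; the cost model and the
# constants are illustrative assumptions, not the paper's exact analysis.

def cost_with_sharing(levels, positions, dict_size_at):
    total = 0
    for h in range(1, levels + 1):
        # Executive summary: resolution coarsens going up the hierarchy.
        level_positions = max(positions // 4 ** h, 1)
        total += dict_size_at(h) * level_positions
    return total

def cost_without_sharing(levels, positions, num_objects):
    # Every object template re-derives its parts at full resolution.
    return num_objects * levels * positions

levels, positions = 10, 4 ** 10
num_objects = 2 ** levels  # exponentially many composable objects

for name, dict_fn in [
    ("regime (i): dictionaries grow exponentially", lambda h: 2 ** h),
    ("regime (iii): dictionaries shrink exponentially",
     lambda h: 2 ** (levels - h)),
]:
    print(name,
          "shared:", cost_with_sharing(levels, positions, dict_fn),
          "unshared:", cost_without_sharing(levels, positions, num_objects))
```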
Human-Machine CRFs for Identifying Bottlenecks in Holistic Scene Understanding
Recent trends in image understanding have pushed for holistic scene
understanding models that jointly reason about various tasks such as object
detection, scene recognition, shape analysis, contextual reasoning, and local
appearance based classifiers. In this work, we are interested in understanding
the roles of these different tasks in improved scene understanding, in
particular semantic segmentation, object detection and scene recognition.
Towards this goal, we "plug in" human subjects for each of the various
components in a state-of-the-art conditional random field model. Comparisons
among various hybrid human-machine CRFs indicate how much "headroom" there is
to improve scene understanding by focusing research efforts on various
individual tasks.
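The "plug-in" protocol lends itself to a simple reading: score a scene labeling with a sum of component potentials, then swap any one machine component for human responses to see how much that component limits the whole. The linear energy form and the names below are illustrative assumptions, not the paper's exact CRF.

```python
# Hypothetical sketch of the human-machine "plug-in" protocol; the linear
# energy form and the component names are assumptions for illustration.

def energy(labeling, components, weights):
    # Lower energy = better joint explanation of the scene.
    return sum(weights[name] * potential(labeling)
               for name, potential in components.items())

def plug_in_human(components, name, human_potential):
    # Replace one machine component (e.g., object-detection scores) with
    # potentials built from human subjects' responses.
    hybrid = dict(components)
    hybrid[name] = human_potential
    return hybrid
```

Comparing the best labelings under the machine-only and each hybrid energy, component by component, indicates where the headroom lies.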
ENTL: Embodied Navigation Trajectory Learner
We propose Embodied Navigation Trajectory Learner (ENTL), a method for
extracting long sequence representations for embodied navigation. Our approach
unifies world modeling, localization and imitation learning into a single
sequence prediction task. We train our model using vector-quantized predictions
of future states conditioned on current states and actions. ENTL's generic
architecture enables the sharing of the spatio-temporal sequence encoder
for multiple challenging embodied tasks. We achieve competitive performance on
navigation tasks using significantly less data than strong baselines while
performing auxiliary tasks such as localization and future frame prediction (a
proxy for world modeling). A key property of our approach is that the model is
pre-trained without any explicit reward signal, which makes the resulting model
generalizable to multiple tasks and environments.
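A minimal sketch of the training signal described above, assuming a VQ tokenizer has already turned observations into discrete ids: a single causal sequence model predicts the next token over an interleaved state/action stream. The vocabulary size, model dimensions, and interleaving are assumptions; this is not the ENTL architecture itself.

```python
# Hypothetical next-token predictor over interleaved, vector-quantized
# state and action tokens; sizes and layout are illustrative assumptions.
import torch
import torch.nn as nn

VOCAB, D_MODEL = 1024, 256  # assumed codebook and model sizes

class StateActionPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(D_MODEL, VOCAB)

    def forward(self, tokens):
        # tokens: interleaved [state, action, state, action, ...] ids.
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.backbone(self.embed(tokens), mask=causal)
        return self.head(h)  # logits for the next quantized token

model = StateActionPredictor()
tokens = torch.randint(0, VOCAB, (2, 16))
# Teacher forcing: predict token t+1 from the prefix ending at t.
loss = nn.functional.cross_entropy(
    model(tokens)[:, :-1].reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))
```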